Learning Abduction under Partial Observability
Authors
Abstract
Juba recently proposed a formulation of learning abductive reasoning from examples, in which both the relative plausibility of various explanations and which explanations are valid are learned directly from data. The main shortcoming of this formulation of the task is that it assumes access to full-information (i.e., fully specified) examples; relatedly, it offers no role for declarative background knowledge, as such knowledge is rendered redundant in the abduction task by complete information. In this work we extend the formulation to utilize partially specified examples, along with declarative background knowledge about the missing data. We show that it is possible to use implicitly learned rules together with the explicitly given declarative knowledge to support hypotheses in the course of abduction. We also show how to use knowledge in the form of graphical causal models to refine the proposed hypotheses. Finally, we observe that when a small explanation exists, it is possible to obtain a much-improved guarantee in the challenging exception-tolerant setting. Such small, human-understandable explanations are of particular interest for potential applications of the task.
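To make the setting concrete, the sketch below is a toy illustration (not the algorithm from the paper) of abduction over partially specified examples: each example records truth values for some attributes and leaves others unobserved, and candidate explanations (small conjunctions of literals) are ranked by how often the query holds among examples they cover. The variable names (rain, sprinkler, wet_grass) and the scoring rule are hypothetical, chosen only to show the shape of the data and the task.

```python
# Toy sketch of abduction from partially specified examples.
# Not the paper's method; all names and the scoring rule are illustrative.

def satisfied(explanation, example):
    """True if every literal of the explanation is explicitly observed to hold."""
    return all(example.get(var) == val for var, val in explanation.items())

def plausibility(explanation, query_var, examples):
    """Fraction of covered examples (with the query observed) in which the query holds.

    Examples where the query is unobserved are skipped -- a crude stand-in for
    genuine reasoning about missing data."""
    covered = [ex for ex in examples
               if satisfied(explanation, ex) and ex.get(query_var) is not None]
    if not covered:
        return 0.0
    return sum(ex[query_var] for ex in covered) / len(covered)

# Partially specified examples: None marks an unobserved attribute.
examples = [
    {"rain": True,  "sprinkler": None,  "wet_grass": True},
    {"rain": True,  "sprinkler": False, "wet_grass": True},
    {"rain": False, "sprinkler": True,  "wet_grass": True},
    {"rain": False, "sprinkler": False, "wet_grass": False},
]

# Enumerate small candidate explanations: single-literal conjunctions.
variables = ["rain", "sprinkler"]
candidates = [{v: b} for v in variables for b in (True, False)]

# Rank explanations for the observation "wet_grass is True".
ranked = sorted(candidates,
                key=lambda h: plausibility(h, "wet_grass", examples),
                reverse=True)
for h in ranked:
    print(h, plausibility(h, "wet_grass", examples))
```

In this toy run, {"rain": True} and {"sprinkler": True} come out as the most plausible explanations for wet grass; the paper's contribution is, roughly, how to obtain such rankings with formal guarantees when examples are incomplete and declarative background knowledge or causal models constrain the missing attributes.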
Similar references
Deep Decentralized Multi-task Multi-Agent Reinforcement Learning under Partial Observability
Many real-world tasks involve multiple agents with partial observability and limited communication. Learning is challenging in these settings due to local viewpoints of agents, which perceive the world as non-stationary due to concurrently exploring teammates. Approaches that learn specialized policies for individual tasks face problems when applied to the real world: not only do agents have to ...
Manifold Embeddings for Model-Based Reinforcement Learning under Partial Observability
Interesting real-world datasets often exhibit nonlinear, noisy, continuous-valued states that are unexplorable, are poorly described by first principles, and are only partially observable. If partial observability can be overcome, these constraints suggest the use of model-based reinforcement learning. We experiment with manifold embeddings to reconstruct the observable state-space in the conte...
LTLf and LDLf Synthesis under Partial Observability
In this paper, we study synthesis under partial observability for logical specifications over finite traces expressed in LTLf/LDLf. This form of synthesis can be seen as a generalization of planning under partial observability in nondeterministic domains, which is known to be 2EXPTIME-complete. We start by showing that the usual "belief-state construction" used in planning under partial observ...
QMDP-Net: Deep Learning for Planning under Partial Observability
This paper introduces the QMDP-net, a neural network architecture for planning under partial observability. The QMDP-net combines the strengths of model-free learning and model-based planning. It is a recurrent policy network, but it represents a policy for a parameterized set of tasks by connecting a model with a planning algorithm that solves the model, thus embedding the solution structure o...
Learning Plans for Safety and Reachability Goals with Partial Observability
Traditional planning assumes reachability goals and/or full observability. In this paper, we propose a novel solution for safety and reachability planning with partial observability. Given a planning domain, a safety property, and a reachability goal, we automatically learn a safe and permissive plan to guide the planning domain so that the safety property is not violated and which can force th...
Journal: CoRR
Volume: abs/1711.04438
Pages: -
Publication date: 2017